
removing dependency on setting the Keras backend #79

Merged · 5 commits · Sep 2, 2022
Conversation

@hstojic (Collaborator) commented on Aug 17, 2022:

This PR addresses #76. The fix itself was fairly straightforward; the rest of the changes remove backend calls throughout the repo.

There was a slight discrepancy in some tests in GPflux/tests/integration/test_svgp_equivalence.py; I couldn't pinpoint where it was coming from.

While this avoids setting the backend globally, one still needs to be careful to pass float64 data to the model (which is the case in all the tests at the moment). @vdutor, how do you want to go about this? Just warn the user in docstrings and some notebooks, or implement a data check in the DeepGP class (and perhaps elsewhere as well)?
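The "data check in DeepGP" option mentioned above could look roughly like the following. This is a minimal sketch using only numpy; the helper name `check_float64` is hypothetical, and in the real codebase such a guard would live in gpflux/models/deep_gp.py and validate the model inputs at construction or call time.

```python
import numpy as np


def check_float64(data, name="data"):
    """Hypothetical guard: reject inputs that are not float64.

    Sketches the 'data check in DeepGP' option discussed in the PR;
    the actual class and call sites are not shown here.
    """
    arr = np.asarray(data)
    if arr.dtype != np.float64:
        raise ValueError(
            f"{name} has dtype {arr.dtype}; GPflux models expect float64. "
            f"Cast your inputs with {name}.astype(np.float64)."
        )
    return arr


# np.random.rand returns float64, so this passes silently:
X = check_float64(np.random.rand(10, 2), "X")
```

The trade-off discussed in the thread is exactly this: a hard check fails fast on float32 data, whereas docstring warnings leave silent precision loss possible.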

gpflux/models/deep_gp.py (resolved)
@@ -103,7 +106,7 @@ def __init__(
         self.distribution_class = prior.__class__
         self.encoder = encoder
         self.compositor = (
-            compositor if compositor is not None else tf.keras.layers.Concatenate(axis=-1)
+            compositor if compositor is not None else tf.keras.layers.Concatenate(axis=-1, dtype=default_float())
hstojic (Collaborator, Author):
This layer needed to be passed an explicit dtype as well.
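The reason an explicit dtype matters here: Keras layers default to float32, while GPflow's `default_float()` is float64, so a layer constructed without `dtype=` would silently cast float64 inputs down to float32. A minimal numpy-only sketch of what that implicit cast costs (no TensorFlow required; the cast below stands in for what a float32 layer would do):

```python
import numpy as np

# A small float64 perturbation around 1.0 ...
x64 = np.float64(1.0) + np.float64(1e-10)

# ... is destroyed by a cast to float32, because float32 spacing
# near 1.0 is ~1.2e-7, far coarser than 1e-10:
x32 = np.float32(x64)

print(x64 - 1.0)  # the 1e-10 perturbation survives in float64
print(x32 - np.float32(1.0))  # prints 0.0: lost in float32
```

This is why the PR threads precision through every layer explicitly instead of relying on the global `tf.keras.backend.set_floatx("float64")` call it removes.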

(fit_adam, fit_adam, {}),
(fit_adam, keras_fit_adam, {}),
(fit_natgrad, fit_natgrad, {}),
(fit_natgrad, keras_fit_natgrad, dict(atol=1e-7)),
hstojic (Collaborator, Author):
This test required loosening the tolerance slightly.



@pytest.mark.parametrize(
    "svgp_fitter, keras_fitter, tol_kw",
    [
        (fit_adam, _keras_fit_adam, {}),
-       (fit_natgrad, _keras_fit_natgrad, dict(atol=1e-8)),
+       (fit_natgrad, _keras_fit_natgrad, dict(atol=1e-6)),
hstojic (Collaborator, Author):
This one as well... it seems there was already an issue with this test's tolerance, so I assumed it might be OK?
I'm not sure where the small differences arise, though. Perhaps Keras is still using float32 somewhere, but I'm not sure how exactly to check that. Any idea?

Comment on lines -32 to -33:
-    tf.keras.backend.set_floatx("float64")
hstojic (Collaborator, Author):
I didn't try to run this, as it needs additional packages.

@hstojic hstojic requested a review from st-- August 25, 2022 09:59
docs/notebooks/intro.py (resolved)
gpflux/layers/latent_variable_layer.py (resolved)
gpflux/layers/latent_variable_layer.py (resolved)
@hstojic hstojic requested a review from vdutor August 31, 2022 23:47
@vdutor (Member) left a comment:
LGTM

gpflux/models/deep_gp.py (resolved)
gpflux/models/deep_gp.py (resolved)
@hstojic hstojic merged commit f95e1cb into develop Sep 2, 2022
@hstojic hstojic deleted the hstojic/dtype branch September 2, 2022 20:12